On the Use of Non-Stationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GA's
Authors
Abstract
In this paper we discuss the use of non-stationary penalty functions to solve general nonlinear programming problems (NP) using real-valued GAs. The non-stationary penalty is a function of the generation number; as the number of generations increases, so does the penalty. Therefore, as the penalty increases, it puts more and more selective pressure on the GA to find a feasible solution. The ideas presented in this paper come from two basic areas: calculus-based nonlinear programming and simulated annealing. The non-stationary penalty methods are tested on four NP test cases and the effectiveness of these methods is reported.
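As a rough illustration of the idea described in the abstract (not the paper's exact formulation), the sketch below uses a generation-dependent penalty of a common non-stationary form, f(x) + (C·t)^α · v(x)^β, where t is the generation number and v(x) the constraint violation. The toy problem, the constants C, α, β, and the simple GA loop are illustrative assumptions only.

```python
import random

# Toy problem (illustrative assumption, not from the paper):
# minimize f(x) = x1^2 + x2^2  subject to g(x) = 1 - x1 - x2 <= 0.
def objective(x):
    return x[0] ** 2 + x[1] ** 2

def violation(x):
    # Amount by which the inequality constraint g(x) <= 0 is violated.
    return max(0.0, 1.0 - x[0] - x[1])

def penalized_fitness(x, generation, C=0.5, alpha=2.0, beta=2.0):
    # Non-stationary penalty: the multiplier (C * t)^alpha grows with the
    # generation number t, so infeasible points are tolerated early in the
    # run but are penalized ever more heavily as the run progresses.
    return objective(x) + (C * generation) ** alpha * violation(x) ** beta

def run(pop_size=30, generations=100, seed=0):
    # Minimal generational loop showing where the penalty enters selection.
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(pop_size)]
    for t in range(1, generations + 1):
        # Rank by penalized fitness; selective pressure toward feasibility
        # increases automatically with the generation number t.
        pop.sort(key=lambda x: penalized_fitness(x, t))
        parents = pop[: pop_size // 2]
        children = [[p[0] + rng.gauss(0, 0.1), p[1] + rng.gauss(0, 0.1)]
                    for p in parents]
        pop = parents + children
    return min(pop, key=lambda x: penalized_fitness(x, generations))

if __name__ == "__main__":
    best = run()
    print(best, objective(best), violation(best))
```

Because the penalty multiplier is small in early generations, the population can explore infeasible regions freely; by the final generations the same infeasible points are heavily penalized, which is the increasing selective pressure the abstract refers to.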
Similar Articles
On the Use of Non-Stationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GA's, Jeffrey
In this paper we discuss the use of non-stationary penalty functions to solve general nonlinear programming problems (NP) using real-valued GAs. The non-stationary penalty is a function of the generation number; as the number of generations increases so does the penalty. Therefore, as the penalty increases it puts more and more selective pressure on the GA to find a feasible solution. The ideas p...
Full Text
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
Full Text
Superlinearly convergent exact penalty projected structured Hessian updating schemes for constrained nonlinear least squares: asymptotic analysis
We present a structured algorithm for solving constrained nonlinear least squares problems, and establish its local two-step Q-superlinear convergence. The approach is based on an adaptive structured scheme due to Mahdavi-Amiri and Bartels of the exact penalty method of Coleman and Conn for nonlinearly constrained optimization problems. The structured adaptation also makes use of the ideas of N...
Full Text
An efficient constraint handling method for genetic algorithms
Many real-world search and optimization problems involve inequality and/or equality constraints and are thus posed as constrained optimization problems. In trying to solve constrained optimization problems using genetic algorithms (GAs) or classical optimization methods, penalty function methods have been the most popular approach, because of their simplicity and ease of implementation. However...
Full Text
A Linesearch-Based Derivative-Free Approach for Nonsmooth Constrained Optimization
In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence towards stationary points, using the Clarke-Jahn directional derivative. In the second part, we consider inequal...
Full Text